Results 1 - 2 of 2
1.
2022 IEEE International Ultrasonics Symposium, IUS 2022 ; 2022-October, 2022.
Article in English | Scopus | ID: covidwho-2191975

ABSTRACT

Deep learning has been implemented to detect COVID-19 features in lung ultrasound B-mode images. However, previous work primarily relied on in vivo images as the training data, which requires manual labeling of thousands of training image examples. To avoid this manual labeling, which is tedious and time consuming, we propose the detection of in vivo COVID-19 features (i.e., A-line, B-line, consolidation) with deep neural networks (DNNs) trained on simulated B-mode images. The simulation-trained DNNs were tested on in vivo B-mode images from healthy subjects and COVID-19 patients. With data augmentation included during the training process, Dice similarity coefficients (DSCs) between ground truth and DNN predictions were maximized, producing mean ± standard deviation values as high as 0.48 ± 0.29, 0.45 ± 0.25, and 0.46 ± 0.35 when segmenting in vivo A-line, B-line, and consolidation features, respectively. Results demonstrate that simulation-trained DNNs are a promising alternative to training with real patient data when segmenting in vivo COVID-19 features. © 2022 IEEE.
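The Dice similarity coefficient reported in this abstract measures overlap between a predicted segmentation mask and its ground truth: twice the intersection divided by the sum of the two mask sizes. A minimal sketch of the metric on binary masks, assuming NumPy arrays (the function name `dice_similarity` and the toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_similarity(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred AND truth| / (|pred| + |truth|), ranging from
    0 (no overlap) to 1 (identical masks).
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / total

# Toy 4x4 example: ground truth has 4 positive pixels, prediction has 3,
# and 3 of them overlap, so DSC = 2*3 / (3 + 4) = 6/7.
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(dice_similarity(pred, truth))  # 6/7 ≈ 0.857
```

Per-feature means and standard deviations like those quoted above (e.g., 0.48 ± 0.29 for A-lines) would then be aggregated over such per-image DSC values.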

2.
Medical Imaging 2022: Computer-Aided Diagnosis ; 12033, 2022.
Article in English | Scopus | ID: covidwho-1923072

ABSTRACT

COVID-19 is a highly infectious disease with high morbidity and mortality, requiring tools to support rapid triage and risk stratification. In response, deep learning has demonstrated great potential to quickly and autonomously detect COVID-19 features in lung ultrasound B-mode images. However, no previous work considers the application of these deep learning models to signal processing stages that occur prior to traditional ultrasound B-mode image formation. Considering the multiple signal processing stages required to achieve ultrasound B-mode images, our research objective is to investigate the most appropriate stage for our deep learning approach to COVID-19 B-line feature detection, starting with raw channel data received by an ultrasound transducer. Results demonstrate that for our given training and testing configuration, the maximum Dice similarity coefficient (DSC) was produced by B-mode images (DSC = 0.996) when compared with three alternative image formation stages that can serve as network inputs: (1) raw in-phase and quadrature (IQ) data before beamforming, (2) beamformed IQ data, (3) envelope detected IQ data. The best-performing simulation-trained network was tested on in vivo B-mode images of COVID-19 patients, ultimately achieving 76% accuracy to detect the same (82% of cases) or more (18% of cases) B-line features when compared to B-line feature detection by human observers interpreting B-mode images. Results are promising to proceed with future COVID-19 B-line feature detection using ultrasound B-mode images as the input to deep learning models. © 2022 SPIE.
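The image formation stages compared in this abstract follow the standard ultrasound pipeline: beamformed IQ data is envelope detected (magnitude of the complex IQ signal) and then log-compressed to produce a B-mode image. A minimal sketch of those last two stages, assuming NumPy and complex-valued beamformed IQ data (the function names `envelope_detect` and `to_bmode` and the toy data are illustrative, not from the paper):

```python
import numpy as np

def envelope_detect(iq):
    """Envelope of complex beamformed IQ data: per-sample magnitude."""
    return np.abs(iq)

def to_bmode(iq, dynamic_range_db=60):
    """Form a B-mode image: normalize the envelope, log-compress to dB,
    and clip to the display dynamic range."""
    env = envelope_detect(iq)
    env = env / env.max()
    bmode_db = 20.0 * np.log10(env + 1e-10)  # small offset avoids log(0)
    return np.clip(bmode_db, -dynamic_range_db, 0.0)

# Toy beamformed IQ frame: a single bright scatterer on a silent background.
iq = np.zeros((8, 8), dtype=complex)
iq[4, 4] = 1.0 + 1.0j
bmode = to_bmode(iq)
# The scatterer maps to ~0 dB; the background clips to -60 dB.
```

Feeding the network raw channel data, beamformed IQ, envelope data, or this final log-compressed image corresponds to the four candidate input stages the abstract compares.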
